Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems
Nonconvex minimax problems appear frequently in emerging machine learning applications, such as generative adversarial networks and adversarial learning. Simple algorithms such as gradient descent ascent (GDA) are common practice for solving these nonconvex games and have enjoyed considerable empirical success. Yet it is known that vanilla GDA with a constant stepsize can diverge even in the convex setting. In this work, we show that for a subclass of nonconvex-nonconcave objectives satisfying a so-called two-sided Polyak-{\L}ojasiewicz inequality, the alternating gradient descent ascent (AGDA) algorithm converges globally at a linear rate and the stochastic AGDA achieves a sublinear rate. We further develop a variance-reduced algorithm that attains a provably faster rate than AGDA when the problem has a finite-sum structure.
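For concreteness, below is a minimal sketch of the alternating update that AGDA performs; the names grad_x, grad_y and the stepsizes tau1, tau2 are illustrative placeholders, not notation taken from the paper.

def agda(grad_x, grad_y, x, y, tau1, tau2, num_iters):
    # Alternating GDA: a descent step on the min variable x, followed by an
    # ascent step on the max variable y evaluated at the freshly updated x.
    # This alternation is what distinguishes AGDA from simultaneous GDA.
    for _ in range(num_iters):
        x = x - tau1 * grad_x(x, y)
        y = y + tau2 * grad_y(x, y)
    return x, y

In the stochastic variant described in the abstract, grad_x and grad_y would be replaced by unbiased stochastic gradient estimates, with stepsizes chosen to obtain the sublinear rate.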
Review for NeurIPS paper: Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems
Weaknesses: Though the paper has some merits, there are several major problems with the submission: 1. The PL condition is a very strong assumption. Although it does not require convexity-concavity, it is a global condition, which roughly requires properties similar to strong convexity-concavity. I agree there are some applications of min-max problems under the PL condition, as mentioned in the paper (see the standard form of the condition stated below), but these applications are extremely limited, and I am not sure they are important to the ML community. In general, nonconvex-nonconcave min-max problems will not satisfy the PL condition. To this extent, the title of the paper is a bit misleading, and it should mention the PL condition explicitly.
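For reference, the two-sided PL condition discussed here is typically stated as requiring, for all $(x,y)$ and some constants $\mu_1, \mu_2 > 0$,
\[
\|\nabla_x f(x,y)\|^2 \ge 2\mu_1 \big( f(x,y) - \min_{x'} f(x',y) \big),
\qquad
\|\nabla_y f(x,y)\|^2 \ge 2\mu_2 \big( \max_{y'} f(x,y') - f(x,y) \big).
\]
This is the commonly used form in the PL literature; the exact constants and normalization in the paper may differ.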
Meta-review for NeurIPS paper: Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems
This paper studies AGDA/Stoc-AGDA for minimax problems that may be nonconvex-nonconcave but satisfy the two-sided Polyak-Łojasiewicz (PL) condition. Moreover, this paper proposes a variance-reduced version of AGDA and achieves better complexity results. The reviewers thought the problem setting was interesting and relevant to NeurIPS but also had a variety of concerns. These concerns were partially mitigated by the response, but other concerns remained. The reviewers had a spirited and comprehensive technical discussion about the merits of this paper. Two reviewers raised their scores (R4: 4→5, R2: 4→7), while one reviewer slightly lowered their score (8→7). Based on the reviews, response, discussion, and my own reading, the main pros and cons of this paper are as follows.